Online and co-regularized algorithms for large scale learning

Author

  • Tom de Ruijter
Abstract

In this work I address the problem of large-scale learning in an online setting. To tackle it, I introduce a novel algorithm that enables semi-supervised learning in an online fashion. By combining state-of-the-art online methods such as Pegasos [3] with the multi-view co-regularization framework, I achieve significantly better performance on regression and binary classification tasks. This shows that incorporating unlabeled data remains practical even in large-scale and online settings. Evaluation is done on several publicly available datasets from the UCI and LibSVM repositories. To evaluate results in a practical setting, I also consider a difficult natural language dataset from the BioInfer corpus [18]. In this setting the introduced algorithm outperforms current state-of-the-art methods for online learning. The main contents of this work will also be presented at the conference Discovery Science 2012 and appear in Lecture Notes in Computer Science [21].
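The abstract builds on the Pegasos stochastic sub-gradient solver. As a minimal sketch (function and variable names are illustrative, not taken from the thesis), a single Pegasos update for the hinge loss looks like:

```python
import numpy as np

def pegasos_step(w, x, y, lam, t):
    """One Pegasos stochastic sub-gradient step for the hinge-loss SVM.

    w: current weight vector; (x, y): labeled example with y in {-1, +1};
    lam: regularization strength; t: 1-based iteration counter.
    """
    eta = 1.0 / (lam * t)                 # Pegasos step-size schedule
    violated = y * np.dot(w, x) < 1.0     # margin check at the current w
    w = (1.0 - eta * lam) * w             # shrinkage from the L2 regularizer
    if violated:
        w = w + eta * y * x               # hinge-loss sub-gradient step
    return w
```

The thesis combines updates of this supervised kind with co-regularization terms computed on unlabeled points across multiple views.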


Related articles

Large Scale Co-Regularized Ranking

As unlabeled data is usually easy to collect, semi-supervised learning algorithms that can be trained on large amounts of unlabeled and labeled data are becoming increasingly popular for ranking and preference learning problems [6, 23, 8, 21]. However, the computational complexity of the vast majority of these (pairwise) ranking and preference learning methods is super-linear, as optimizing an o...


Online Co-regularized Algorithms

We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks and a real world natural language processing dataset. The presented algorithm is particularly ap...
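The co-regularization step this abstract describes can be illustrated as a gradient step on the squared disagreement between two views' predictors at an unlabeled point. This is a hedged sketch under my own naming, not code from the paper:

```python
import numpy as np

def coreg_step(w1, w2, x1, x2, mu, eta):
    """Gradient step on the disagreement penalty (mu/2) * (w1.x1 - w2.x2)^2
    for one unlabeled point represented in two feature views x1 and x2."""
    diff = np.dot(w1, x1) - np.dot(w2, x2)  # current disagreement between views
    w1 = w1 - eta * mu * diff * x1          # pull view 1's prediction toward view 2's
    w2 = w2 + eta * mu * diff * x2          # pull view 2's prediction toward view 1's
    return w1, w2
```

Because the penalty needs no label, such steps can be interleaved with supervised updates whenever an unlabeled point arrives in the stream.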


Fast Implementation of ℓ1 Regularized Learning Algorithms Using Gradient Descent Methods

With the advent of high-throughput technologies, l1 regularized learning algorithms have attracted much attention recently. Dozens of algorithms have been proposed for fast implementation, using various advanced optimization techniques. In this paper, we demonstrate that l1 regularized learning problems can be easily solved by using gradient-descent techniques. The basic idea is to transform a ...
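The abstract is truncated before naming its transformation, so the following is an assumption: one standard way to make the non-smooth ℓ1 term amenable to plain gradient descent is to split the weights into non-negative parts, w = u − v with u, v ≥ 0, which turns ‖w‖₁ into the smooth linear term sum(u + v):

```python
import numpy as np

def l1_gd_step(u, v, grad_loss, lam, eta):
    """Projected gradient step for min_w loss(w) + lam * ||w||_1 after the
    split w = u - v with u, v >= 0, where ||w||_1 becomes sum(u + v).

    grad_loss: gradient of the smooth loss evaluated at w = u - v.
    """
    u = np.maximum(0.0, u - eta * (grad_loss + lam))   # gradient step, then project u >= 0
    v = np.maximum(0.0, v - eta * (-grad_loss + lam))  # gradient step, then project v >= 0
    return u, v
```

Whether this particular split is the one the paper uses is not recoverable from the truncated abstract; it is shown here only as a representative instance of the technique.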


(Online) Subgradient Methods for Structured Prediction

Promising approaches to structured learning problems have recently been developed in the maximum margin framework. Unfortunately, algorithms that are computationally and memory efficient enough to solve large scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, a...




Journal title:

Volume  Issue 

Pages  -

Publication date: 2012